When is there a representer theorem?

Authors

Abstract

We consider a general regularised interpolation problem for learning a parameter vector from data. The well-known representer theorem says that, under certain conditions on the regulariser, there exists a solution in the linear span of the data points. This is at the core of kernel methods in machine learning, as it makes the problem computationally tractable. Most of the literature deals only with sufficient conditions for representer theorems in Hilbert spaces and shows that the regulariser being norm-based suffices for the existence of a representer theorem. We prove necessary and sufficient conditions for representer theorems in reflexive Banach spaces and show that any regulariser essentially has to be norm-based for a representer theorem to exist. Moreover, we illustrate why, in a sense, reflexivity is the minimal requirement on the function space. We further show that if the learning method relies on the representer theorem, then the solution is independent of the regulariser and in fact determined by the function space alone. This is of particular value for generalising the theory to Banach spaces.
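For context, the setting the abstract refers to can be sketched in standard notation (a sketch of the usual formulation, not taken verbatim from the paper). Given data (x_1, y_1), ..., (x_n, y_n) and a space \mathcal{H} of functions with reproducing kernel k, regularised interpolation solves

\min_{f \in \mathcal{H}} \Omega(f) \quad \text{subject to} \quad f(x_i) = y_i, \quad i = 1, \dots, n,

and a linear representer theorem asserts that some solution lies in the span of the data representers,

f^{*} = \sum_{i=1}^{n} c_i \, k(x_i, \cdot), \qquad c_i \in \mathbb{R},

reducing the infinite-dimensional problem to finding n coefficients. The paper's contribution, per the abstract, is that in reflexive Banach spaces such a theorem can hold essentially only for regularisers of the form \Omega(f) = h(\|f\|) with h nondecreasing.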


Related articles

When Is There a Representer Theorem? Vector Versus Matrix Regularizers

We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product then the learned vector is a linear combination of the input data. This result, known as the representer theorem, is at the basis of kernel-based methods in machine learning. In thi...
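The vector-valued setting of this excerpt can be made concrete as follows (a hedged sketch in standard notation, not the paper's exact statement). With an error function E, one learns

w^{*} \in \arg\min_{w} \; E\big(\langle w, x_1 \rangle, \dots, \langle w, x_n \rangle\big) + \Omega(w),

and if \Omega(w) = h(\langle w, w \rangle) for some nondecreasing h, then there is a solution of the form

w^{*} = \sum_{i=1}^{n} c_i \, x_i,

i.e. a linear combination of the input data, which is the representer theorem referred to above.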


A Generalized Representer Theorem

Wahba’s classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel...
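Wahba's setting, an empirical risk term plus a quadratic regulariser, is easy to verify numerically. The sketch below is a minimal illustration of that standard setting, not code from the paper; the data, the kernel choice and names such as rbf_kernel are assumptions. It solves kernel ridge regression and evaluates the learned function purely through the kernel expansion predicted by the representer theorem.

import numpy as np

# The minimiser of  sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2  over an RKHS
# can be written as f = sum_i c_i k(x_i, .) with c = (K + lam I)^{-1} y.

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(20, 1))            # training inputs
y = np.sin(X).ravel() + 0.1 * rng.normal(size=20)   # noisy targets
lam = 0.1                                           # regularisation weight

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

K = rbf_kernel(X, X)
c = np.linalg.solve(K + lam * np.eye(len(X)), y)    # expansion coefficients

def f(x_new):
    """Evaluate the learned function via the kernel expansion
    f(x) = sum_i c_i k(x_i, x) -- the form the representer theorem gives."""
    return rbf_kernel(np.atleast_2d(x_new), X) @ c

print(f(np.array([[0.5]])))  # prediction at a new input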


Characterizing the Representer Theorem

The representer theorem assures that kernel methods retain optimality under penalized empirical risk minimization. While a sufficient condition on the form of the regularizer guaranteeing the representer theorem has been known since the initial development of kernel methods, necessary conditions have only been investigated recently. In this paper we completely characterize the necessary and suf...
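A two-dimensional example (my own illustration, in the standard finite-dimensional formulation) shows why a regulariser that is not a function of the norm breaks the representer theorem. Take \Omega(w) = w_1^2 + 2 w_2^2 on \mathbb{R}^2 with the single interpolation constraint \langle w, x \rangle = 1 for x = (1, 1). Minimising \Omega subject to w_1 + w_2 = 1 via a Lagrange multiplier gives 2 w_1 = \lambda = 4 w_2, hence the unique minimiser w^{*} = (2/3, 1/3). This is not a multiple of x, so no solution lies in \operatorname{span}\{x\} and the representer theorem fails for this \Omega.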


A representer theorem for deep neural networks

We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network c...
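The shape of the result, as a hedged summary of this line of work rather than a quotation, is that with a second-order total-variation penalty \mathrm{TV}^2(\sigma) = \|D^2 \sigma\|_{\mathcal{M}} on each activation, optimal networks have activations that are adaptive linear splines,

\sigma(x) = b_0 + b_1 x + \sum_{k=1}^{K} a_k \, (x - \tau_k)_+,

with finitely many knots \tau_k learned from the data; the ReLU (x)_+ appears as the basic building block, which is the connection to splines and sparsity mentioned above.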


A representer theorem for deep kernel learning

In this paper we provide a representer theorem for a concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. This fundamental result serves as a first mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence of this new representer theorem, the corresponding infinite-dimensional m...
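The shape of such a result can be sketched as follows (a hedged reading of the abstract, not the paper's exact statement). For a composition f = f_L \circ \dots \circ f_1 with each layer f_l in an RKHS \mathcal{H}_l with kernel k_l, an optimal solution admits layer-wise finite expansions

f_l(\cdot) = \sum_{i=1}^{n} c_i^{(l)} \, k_l\big( (f_{l-1} \circ \dots \circ f_1)(x_i), \cdot \big),

so each infinite-dimensional layer is again parametrised by n coefficients, centred at the images of the training points under the preceding layers.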



Journal

Journal title: Advances in Computational Mathematics

Year: 2021

ISSN: 1019-7168, 1572-9044

DOI: https://doi.org/10.1007/s10444-021-09877-4